
    A partition methodology to develop data flow dominated embedded systems

    Communication presented at the International Workshop on Model-Based Methodologies for Pervasive and Embedded Software (MOMPES 2004), Hamilton, Ontario, Canada, 15-18 June 2004. This paper proposes an automatic partitioning methodology for developing data flow dominated embedded systems. The target architecture is CPU-based with reconfigurable devices on attached board(s), which closely matches the PSM meta-model applied to system modelling. A PSM flow graph was developed to represent the system during the partitioning process. The partitioning task applies known optimization algorithms - tabu search and cluster growth - enriched with new elements to reduce computation time and to achieve higher-quality partition solutions. These include the closeness function that guides the cluster growth algorithm, which dynamically adapts to the type of object and partition under analysis. The methodology was applied to two case studies, and some evaluation results are presented.
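The cluster growth idea above can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: all names are ours, and the closeness function shown simply weights shared edges by a per-node-type factor to mimic "adapting to the type of object under analysis".

```python
# Hypothetical sketch of cluster growth partitioning guided by a
# type-adaptive closeness function. Nodes are dicts; `graph` maps a
# node id to its neighbour ids.

def closeness(node, cluster, graph):
    """Edges into the cluster, weighted by an illustrative per-type factor."""
    weight = {"dataflow": 2.0, "control": 1.0}.get(node["type"], 1.0)
    shared = sum(1 for n in graph[node["id"]] if n in cluster)
    return weight * shared

def cluster_growth(nodes, graph, seed_id):
    """Greedily grow a partition from a seed, always absorbing the
    closest remaining node, until no node is connected to the cluster."""
    cluster = {seed_id}
    remaining = {n["id"]: n for n in nodes if n["id"] != seed_id}
    while remaining:
        best = max(remaining.values(),
                   key=lambda n: closeness(n, cluster, graph))
        if closeness(best, cluster, graph) == 0:
            break  # nothing left is close enough to join this cluster
        cluster.add(best["id"])
        del remaining[best["id"]]
    return cluster
```

In the paper the closeness function also depends on the partition under analysis (e.g. the reconfigurable vs. CPU side); here a single weight table stands in for that adaptation.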

    Quality-Aware Reactive Programming for the Internet of Things

    © 2017, IFIP International Federation for Information Processing. The reactive paradigm recently became very popular in user-interface development: updates - such as the ones from the mouse, keyboard, or from the network - can trigger a chain of computations organised in a dependency graph, letting the underlying engine control the scheduling of these computations. In the context of the Internet of Things (IoT), typical applications deploy components in distributed nodes and link their interfaces, employing a publish-subscribe architecture. The distributed reactive programming paradigm marries these two concepts, treating each distributed component as a reactive computation. However, existing approaches either require expensive synchronisation mechanisms or do not support pipelining, i.e., allowing multiple "waves" of updates to be executed in parallel. We propose Quarp (Quality-Aware Reactive Programming), a scalable and lightweight mechanism aimed at the IoT to orchestrate components triggered by updates of data-producing components or of aggregating components. This mechanism appends meta-information to messages between components capturing the context of the data, used to dynamically monitor and guarantee useful properties of the dynamic applications, including so-called glitch freedom, time synchronisation, and geographical proximity. We formalise Quarp using a simple operational semantics, provide concrete examples of useful instances of contexts, and situate our approach in the realm of distributed reactive programming. Partially financed by the personal FCT grant SFRH/BPD/91908/2012. The research leading to these results has received funding from the European Union's Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732505 (LightKone). Project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).
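The context-propagation idea can be made concrete with a small sketch. This is our simplification, not the paper's formal semantics: a message carries its value plus a context mapping each data source to a version number, and an aggregator only fires when its inputs agree on the version of every shared source, which is one way to read the glitch-freedom guarantee.

```python
# Hypothetical Quarp-style context propagation. A message is
# (value, context), where context maps source name -> version.

def compatible(ctx_a, ctx_b):
    """Contexts are compatible if they agree on every shared source."""
    shared = ctx_a.keys() & ctx_b.keys()
    return all(ctx_a[s] == ctx_b[s] for s in shared)

def merge(ctx_a, ctx_b):
    """Union of two compatible contexts, carried on the outgoing message."""
    return {**ctx_a, **ctx_b}

def aggregate(msg_a, msg_b, combine):
    """Fire the aggregating component only when inputs are compatible;
    otherwise hold back, since firing now could expose a glitch."""
    val_a, ctx_a = msg_a
    val_b, ctx_b = msg_b
    if not compatible(ctx_a, ctx_b):
        return None
    return combine(val_a, val_b), merge(ctx_a, ctx_b)
```

The same pattern extends to the other properties mentioned: a context could instead carry timestamps (time synchronisation) or coordinates (geographical proximity), with `compatible` checking the corresponding condition.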

    Data abstraction in coordination constraints

    Communications in Computer and Information Science 393, 2013. This paper studies complex coordination mechanisms based on constraint satisfaction. In particular, it focuses on data-sensitive connectors from the Reo coordination language. These connectors restrict how and where data can flow between loosely-coupled components, taking into account the data being exchanged. Existing engines for Reo provide very limited support for data-sensitive connectors, even though data constraints are captured by the original semantic models for Reo. When executing data-sensitive connectors, coordination constraints are not exhaustively solved at compile time but at runtime, on a per-need basis, powered by an existing SMT (satisfiability modulo theories) solver. To deal with a wider range of data types and operations, we abstract data and reduce the original constraint satisfaction problem to a SAT problem, based on a variation of predicate abstraction. We show soundness and completeness of the abstraction mechanism for well-defined constraints, and validate our approach by evaluating the performance of a prototype implementation on different test cases, with and without abstraction.
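The abstraction step can be illustrated in miniature. This sketch is ours, not the Reo engine's implementation: each distinct data predicate is replaced by a fresh boolean variable, and the resulting propositional formula is handed to a SAT procedure (a brute-force one here, standing in for a real solver).

```python
# Hypothetical sketch of predicate abstraction: data predicates
# (represented as strings) become boolean variables, turning data-aware
# constraint satisfaction into SAT.
from itertools import product

def abstract(constraint_preds):
    """Map each distinct predicate to a boolean variable index,
    preserving first-occurrence order."""
    return {p: i for i, p in enumerate(dict.fromkeys(constraint_preds))}

def sat(clauses, n_vars):
    """Brute-force SAT over a CNF: each clause is a list of
    (variable, polarity) literals. Returns a satisfying assignment
    as a tuple of booleans, or None."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[v] == pol for v, pol in cl) for cl in clauses):
            return assign
    return None
```

In the actual approach a SAT model found at this boolean level must still be checked against (or refined with) the concrete data semantics, which is where the soundness and completeness argument for well-defined constraints comes in.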

    Proceedings Fifth Workshop on Formal Integrated Development Environment

    F-IDE 2019 is the fifth international workshop on Formal Integrated Development Environment, held on October 7, 2019 in Porto, Portugal, as part of the FM 2019 World Congress on Formal Methods. High levels of safety, security and also privacy standards require the use of formal methods to specify and develop compliant software (sub)systems. Any standard comes with an assessment process, which requires a complete documentation of the application in order to ease the justification of design choices and the review of code and proofs. Ideally, an F-IDE dedicated to such developments should comply with several requirements. The first one is to associate a logical theory with a programming language, in a way that facilitates the tightly coupled handling of specification properties and program constructs. The second is to offer a language/environment simple enough to be usable by most developers, even if they are not fully acquainted with higher-order logics or set theory, in particular by making development of proofs as easy as possible. The third is to offer automated management of application documentation. It may also be expected that developments done with such an F-IDE are reusable and modular. Tools for testing and static analysis may be embedded within F-IDEs to support the assessment process. The workshop is a forum of exchange on different features related to F-IDEs. We solicited several kinds of contributions: research papers providing new concepts and results, position papers and research perspectives, experience reports, tool presentations. The workshop was open to contributions on all aspects of a system development process, including specification, design, implementation, analysis and documentation. The current edition is a one-day workshop with eight communications, offering a large variety of approaches, techniques and tools. Each submission was reviewed by three reviewers. 
    We also had the honor of welcoming Wolfgang Ahrendt, from Chalmers University of Technology, who gave a keynote entitled "What is KeY's key to software verification?".

    A Procedure for Splitting Processes and its Application to Coordination

    We present a procedure for splitting processes in a process algebra with multi-actions (a subset of the specification language mCRL2). This splitting procedure cuts a process into two processes along a set of actions A: roughly, one of these processes contains no actions from A, while the other process contains only actions from A. We state and prove a theorem asserting that the parallel composition of these two processes equals the original process under appropriate synchronization. We apply our splitting procedure to the process algebraic semantics of the coordination language Reo: using this procedure and its related theorem, we formally establish the soundness of splitting Reo connectors along the boundaries of their (a)synchronous regions in implementations of Reo. Such splitting can significantly improve the performance of connectors. Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
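The splitting idea can be shown on a toy trace model. This is an illustration of the statement of the theorem, not of mCRL2 or the paper's proof: a process is a sequence of multi-actions (sets of action names), each multi-action is cut into its A-part and its non-A-part, and stepwise synchronisation of the two halves recovers the original process.

```python
# Toy model: a process is a list of multi-actions (frozensets of names).

def split(process, A):
    """Cut each multi-action along A: the left half contains no actions
    from A, the right half only actions from A."""
    left  = [m - A for m in process]
    right = [m & A for m in process]
    return left, right

def sync(left, right):
    """Lock-step parallel composition: re-merge the halves of each step."""
    return [l | r for l, r in zip(left, right)]
```

The real setting is harder precisely because mCRL2 processes are not flat traces and synchronisation is not lock-step, which is why the soundness theorem needs "appropriate synchronization".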

    Web-based Interface for Tailored Time Series Analysis and Visualization

    In the field of movement disorders there is a clear need for more accurate and insightful metrics to support clinicians in their decision-making process. These metrics should be easy to understand and be presented to the clinician in an intuitive way, to avoid creating a wall between clinicians and their patients. This could happen if the information system that presents these metrics requires too much interaction, if it does not show the relevant data for each situation, or if it delays information collection and processing. Clinical Decision Support Information Systems have been growing in usage and usefulness, but there are still cases of mis-implementations that do not put the user first or do not take their workflows into consideration. This leads to less effective health care and poorly spent resources. With this project we will produce a customizable web interface, to integrate such a system, that must have clinicians and their needs as the first priority. This interface has to be fast and intuitive while, at the same time, presenting useful and clinically relevant metrics. Additional information will be presented as optional and non-intrusive, not requiring any input from users unless they want to interact with it.

    COVID-19 risk perception and confidence among clinical dental students: impact on patient management

    Communication abstract: Proceedings of the 5th International Congress of CiiEM - Reducing inequalities in Health and Society, held at Egas Moniz' University Campus in Monte de Caparica, Almada, from June 16th to 18th, 2021. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This study aimed to assess COVID-19 perceived risk, confidence, and their impact on management practices for potentially infected patients in a clinical dental education setting. The survey was conducted by applying a self-administered questionnaire amid the COVID-19 pandemic. Results indicate high COVID-19 perceived risk and confidence levels (86.7% and 72.8%, respectively). A significantly lower risk perception was identified for individuals classifying COVID-19 as a moderately dangerous disease, and confidence was significantly lower for women and for individuals not previously exposed to confirmed or suspected cases of COVID-19. No factor-related significant differences were found in management practices for potentially infected patients.

    HEP-Frame: A software engineered framework to aid the development and efficient multicore execution of scientific code

    This communication presents an evolutionary software prototype of a user-centered Highly Efficient Pipelined Framework, HEP-Frame, to aid the development of sustainable parallel scientific code with a flexible pipeline structure. HEP-Frame is the result of a tight collaboration between computational scientists and software engineers: it aims to improve scientists' coding productivity, ensuring an efficient parallel execution on a wide set of multicore systems, with both HPC and HTC techniques. The current prototype complies with the requirements of an actual scientific code, includes desirable sustainability features and supports, at compile time, additional plugin interfaces for other scientific fields. The porting and development productivity was assessed, and preliminary efficiency results are promising. This work was supported by FCT (Fundação para a Ciência e Tecnologia) within Project Scope (UID/CEC/00319/2013), by LIP (Laboratório de Instrumentação e Física Experimental de Partículas) and by Project Search-ON2 (NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.

    Tuning pipelined scientific data analyses for efficient multicore execution

    Scientific data analyses often apply a pipelined sequence of computational tasks to independent datasets. Each task in the pipeline captures and processes a dataset element, may be dependent on other tasks in the pipeline, may have a different computational complexity and may be filtered out from progressing in the pipeline. The goal of this work is to develop an efficient scheduler that automatically (i) manages parallel data reading and the creation of an adequate data structure, (ii) adaptively defines the most efficient order of pipeline execution of the tasks, considering their inter-dependence and both the filtering-out rate and the computational weight, and (iii) manages the parallel execution of the computational tasks in a multicore system, applied to the same or to different dataset elements. A real case study data analysis application from High Energy Physics (HEP) was used to validate the efficiency of this scheduler. Preliminary results show an impressive performance improvement of the pipeline tuning when compared to the original sequential HEP code (up to a 35x speedup in a dual 12-core system), and also show significant performance speedups over conventional parallelization approaches of this case study application (up to 10x faster in the same system). Project Search-ON2 (NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.
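Point (ii) can be sketched for the special case of independent filtering tasks. This is our simplification, not HEP-Frame's scheduler: when filters are independent, the classic rule is to run them in increasing order of cost divided by rejection rate, i.e. sort by cost / (1 - pass_rate), so that cheap, highly-selective cuts discard events before expensive tasks see them.

```python
# Hypothetical pipeline-tuning sketch for independent filters.
# A task is (name, cost, pass_rate), with 0 <= pass_rate < 1.

def order_tasks(tasks):
    """Tuned order: ascending cost per unit of rejection."""
    return sorted(tasks, key=lambda t: t[1] / (1.0 - t[2]))

def expected_cost(tasks):
    """Expected per-event cost of running the pipeline in this order."""
    cost, reach = 0.0, 1.0  # `reach`: probability an event gets this far
    for _name, c, p in tasks:
        cost += reach * c
        reach *= p
    return cost
```

The real scheduler also has to respect inter-task dependencies and adapt the ordering at runtime from measured rates; this sketch only captures why ordering by filter rate and computational weight pays off.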

    Removing inefficiencies from scientific code : the study of the Higgs boson couplings to top quarks

    Published in "Computational Science and Its Applications - ICCSA 2014: Proceedings", Series: Lecture Notes in Computer Science, vol. 8582. This paper presents a set of methods and techniques to remove inefficiencies in a data analysis application used in searches by the ATLAS Experiment at the Large Hadron Collider. Profiling scientific code helped to pinpoint design and runtime inefficiencies, the former due to coding and data structure design. The data analysis code used by groups doing searches in the ATLAS Experiment helped to clearly identify some of these inefficiencies and to give suggestions on how to prevent and overcome such common situations in scientific code, improving the efficient use of available computational resources in a parallel homogeneous platform. This work is funded by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2014, by LIP (Laboratório de Instrumentação e Física Experimental de Partículas), and the SeARCH cluster (REEQ/443/EEI/2005).